21 research outputs found

    Multimodal sensor fusion in the latent representation space

    A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate the method's effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
    Comment: Under review for Nature Scientific Report
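    The two-stage recipe described above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: a fixed random linear map plays the role of the pre-trained generative model's decoder, and stage 2 searches the latent space so the decoded signal matches a subsampled observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 stand-in (hypothetical): assume a generative model was already
# trained; here its decoder is a fixed random linear map z -> x.
D = rng.normal(size=(32, 4))            # latent dim 4, signal dim 32
decode = lambda z: D @ z

z_true = rng.normal(size=4)
x_true = decode(z_true)

# Compressed-sensing style observation: only 12 of the 32 entries are seen
idx = rng.choice(32, size=12, replace=False)
y = x_true[idx]

# Stage 2: gradient descent over the latent code so the decoded signal
# matches the subsampled observation (the "search manifold" idea)
A = D[idx]
lr = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step size for least squares
z = np.zeros(4)
for _ in range(1000):
    z -= lr * A.T @ (A @ z - y)

x_hat = decode(z)                       # full signal recovered from partial data
print(float(np.linalg.norm(x_hat - x_true)))
```

    Because the search is confined to the model's latent space, recovering the low-dimensional code from the observed entries fills in the unobserved ones for free.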

    UWB and WiFi Systems as Passive Opportunistic Activity Sensing Radars

    Human Activity Recognition (HAR) is becoming increasingly important in smart homes and healthcare applications such as assisted living and remote health monitoring. In this paper, we use Ultra-Wideband (UWB) and commodity WiFi systems for the passive sensing of human activities. These systems are based on a receiver-only radar network that detects reflections of ambient Radio-Frequency (RF) signals from humans in the form of Channel Impulse Response (CIR) and Channel State Information (CSI). An experiment was performed in which the transmitter and receiver were separated by a fixed distance in a Line-of-Sight (LoS) setting. Five activities were performed between them: sitting, standing, lying down, standing up from the floor, and walking. We use the high-resolution CIRs provided by the UWB modules as features in machine and deep learning algorithms for classifying the activities. Experimental results show that a classification performance with an F1-score as high as 95.53% is achieved using processed UWB CIR data as features. Furthermore, we analysed the classification performance in the same physical layout using CSI data extracted from a dedicated WiFi Network Interface Card (NIC). In this case, maximum F1-scores of 92.24% and 80.89% are obtained when amplitude CSI data and spectrograms are used as features, respectively.
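    The classification step this abstract describes — feeding CIR-derived feature vectors to a learning algorithm — can be sketched with synthetic data. The random clusters below are hypothetical stand-ins for processed CIR features (one cluster per activity), and a nearest-centroid rule plays the role of the classifier; the paper's actual features and models are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
activities = ["sit", "stand", "lie", "stand_up", "walk"]

# Synthetic stand-in for processed CIR feature vectors: one cluster of
# 40 samples per activity, 20 features each.
centers = rng.normal(size=(5, 20)) * 3.0
X = np.vstack([c + rng.normal(scale=0.5, size=(40, 20)) for c in centers])
y = np.repeat(np.arange(5), 40)

# Nearest-centroid classifier as a minimal baseline: even rows train,
# odd rows test.
train = np.arange(200) % 2 == 0
centroids = np.stack([X[train & (y == k)].mean(axis=0) for k in range(5)])
pred = np.argmin(((X[~train, None] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == y[~train]).mean()
print(f"accuracy: {acc:.2f}")
```

    On well-separated clusters even this baseline scores near-perfectly; the gap between UWB CIR and WiFi CSI results in the paper comes from how informative each feature source is, not from the classifier alone.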

    Privacy in Multimodal Federated Human Activity Recognition

    Human Activity Recognition (HAR) training data is often privacy-sensitive or held by non-cooperative entities. Federated Learning (FL) addresses such concerns by training ML models on edge clients. This work studies the impact of privacy in federated HAR at the user, environment, and sensor levels. We show that the performance of FL for HAR depends on the assumed privacy level of the FL system and primarily on the colocation of data from different sensors. By avoiding data sharing and assuming privacy at the human or environment level, as prior works have done, accuracy decreases by 5-7%. However, extending this to the modality level and strictly separating sensor data between multiple clients may decrease accuracy by 19-42%. As this form of privacy is necessary for the ethical use of passive sensing methods in HAR, we implement a system in which clients mutually train both a general FL model and a group-level model per modality. Our evaluation shows that this method leads to only a 7-13% decrease in accuracy, making it possible to build HAR systems with diverse hardware.
    Comment: In 3rd On-Device Intelligence Workshop at MLSys 2023, 8 pages
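    The aggregation scheme in this abstract — one general federated model plus one group-level model per modality — can be sketched as a toy federated-averaging round. The clients, modalities, and linear "models" below are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each client holds data from a single sensor modality; its "model" is a
# toy parameter vector (a stand-in for real network weights).
clients = [
    {"modality": "imu",    "w": rng.normal(size=3)},
    {"modality": "imu",    "w": rng.normal(size=3)},
    {"modality": "camera", "w": rng.normal(size=3)},
    {"modality": "camera", "w": rng.normal(size=3)},
]

def fed_avg(weights):
    """Unweighted FedAvg: element-wise mean of client parameters."""
    return np.mean(np.stack(weights), axis=0)

# One global model aggregated over all clients...
global_model = fed_avg([c["w"] for c in clients])

# ...plus one group-level model per modality, so raw sensor data is never
# colocated across modalities.
group_models = {
    m: fed_avg([c["w"] for c in clients if c["modality"] == m])
    for m in {"imu", "camera"}
}
print(sorted(group_models))
```

    The privacy property comes from what is shared: only parameter vectors leave a client, and the per-modality aggregation never mixes one modality's updates into another group's model.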

    ATG-PVD: Ticketing Parking Violations on A Drone

    In this paper, we introduce a novel suspect-and-investigate framework, which can be easily embedded in a drone for automated parking violation detection (PVD). Our proposed framework consists of: 1) SwiftFlow, an efficient and accurate convolutional neural network (CNN) for unsupervised optical flow estimation; 2) Flow-RCNN, a flow-guided CNN for car detection and classification; and 3) an illegally parked car (IPC) candidate investigation module developed based on visual SLAM. The proposed framework was successfully embedded in a drone from ATG Robotics. The experimental results demonstrate that, firstly, our proposed SwiftFlow outperforms all other state-of-the-art unsupervised optical flow estimation approaches in terms of both speed and accuracy; secondly, IPC candidates can be effectively and efficiently detected by our proposed Flow-RCNN, with better performance than our baseline network, Faster-RCNN; and finally, the actual IPCs can be successfully verified by our investigation module after drone re-localization.
    Comment: 17 pages, 11 figures and 3 tables. This paper is accepted by ECCV Workshops 202
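    The control flow of a suspect-and-investigate pipeline can be sketched as a skeleton. Every function body here is a hypothetical stub standing in for the paper's components (flow estimation for SwiftFlow, presence checks for the SLAM-based investigation); only the two-stage structure mirrors the abstract.

```python
def estimate_flow(prev_frame, frame):
    """Stub for optical-flow-based motion: per-car displacement magnitude."""
    return {car: abs(frame[car] - prev_frame.get(car, frame[car]))
            for car in frame}

def suspect_stage(prev_frame, frame, no_parking_zone):
    """Flag stationary cars inside a no-parking zone as IPC candidates."""
    motion = estimate_flow(prev_frame, frame)
    return {car for car, m in motion.items()
            if m < 0.1 and car in no_parking_zone}

def investigate_stage(candidates, revisit_frame, no_parking_zone):
    """Confirm candidates still present at the same spot after the drone
    re-localizes and revisits the scene."""
    return {car for car in candidates
            if car in revisit_frame and car in no_parking_zone}

# Toy frames map a car id to its position along the street
prev_f  = {"car1": 5.0, "car2": 12.0}
cur_f   = {"car1": 5.0, "car2": 14.5}   # car2 is moving, car1 is parked
revisit = {"car1": 5.0}                 # car1 still there on revisit
zone    = {"car1", "car2"}

ipcs = investigate_stage(suspect_stage(prev_f, cur_f, zone), revisit, zone)
print(sorted(ipcs))  # -> ['car1']
```

    The two-stage split is the design point: the cheap suspect stage runs continuously on the drone, while the costlier investigation only runs for flagged candidates.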